Lagrangian support vector regression via unconstrained convex minimization
Authors
Abstract
In this paper, a simple reformulation of the Lagrangian dual of the 2-norm support vector regression (SVR) is proposed as an unconstrained minimization problem. This formulation has the advantage that its objective function is strongly convex and has only m variables, where m is the number of input data points. The proposed unconstrained Lagrangian SVR (ULSVR) is solvable by computing the zeros of its gradient. However, since its objective function contains the non-smooth 'plus' function, two approaches are followed to solve the proposed optimization problem: (i) introduce a smooth approximation and solve the resulting slightly modified unconstrained minimization problem; (ii) solve the problem directly by applying the generalized derivative. Computational results on a number of synthetic and real-world benchmark datasets show generalization performance similar to that of conventional SVR, with much faster learning and training time very close to that of least squares SVR, clearly indicating the superiority of ULSVR solved by both the smooth and the generalized derivative approaches.
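The abstract describes but does not display the two solution strategies, so below is a minimal sketch of both applied to a generic strongly convex piecewise-quadratic objective f(u) = 0.5·uᵀQu − rᵀu + (ρ/2)·‖(u)₊‖², chosen only to mimic the structure described above (strongly convex, m variables, a non-smooth 'plus' term). The entropy-based smoothing p(x, α) = x + (1/α)·log(1 + e^(−αx)), the step sizes, and every name in the code (smooth_plus, solve_smooth, solve_generalized_derivative, Q, r, rho, alpha) are illustrative assumptions, not the actual ULSVR formulation from the paper.

```python
import numpy as np

def smooth_plus(x, alpha=5.0):
    # Entropy-based smoothing of the plus function max(x, 0):
    #   p(x, a) = x + (1/a) * log(1 + exp(-a*x)),  p -> max(x, 0) as a -> inf.
    # np.logaddexp(0, -a*x) evaluates log(1 + exp(-a*x)) without overflow.
    return x + np.logaddexp(0.0, -alpha * x) / alpha

def smooth_plus_grad(x, alpha=5.0):
    # Derivative of the smoothing: the sigmoid, written via tanh for stability.
    return 0.5 * (1.0 + np.tanh(0.5 * alpha * x))

def solve_smooth(Q, r, rho=10.0, alpha=5.0, tol=1e-6, max_iter=5000):
    # Approach (i): replace (u)_+ by smooth_plus and run plain gradient
    # descent on f(u) = 0.5*u'Qu - r'u + 0.5*rho*||p(u, alpha)||^2.
    u = np.zeros_like(r)
    step = 1.0 / (np.linalg.norm(Q, 2) + 2.5 * rho)  # crude Lipschitz bound
    for _ in range(max_iter):
        g = Q @ u - r + rho * smooth_plus(u, alpha) * smooth_plus_grad(u, alpha)
        if np.linalg.norm(g) < tol:
            break
        u -= step * g
    return u

def solve_generalized_derivative(Q, r, rho=10.0, tol=1e-6, max_iter=100):
    # Approach (ii): keep the non-smooth plus function and use the step
    # function as its generalized derivative inside a Newton-type iteration.
    u = np.zeros_like(r)
    for _ in range(max_iter):
        g = Q @ u - r + rho * np.maximum(u, 0.0)  # gradient containing (u)_+
        if np.linalg.norm(g) < tol:
            break
        D = np.diag((u > 0).astype(float))        # generalized Jacobian of (u)_+
        u -= np.linalg.solve(Q + rho * D, g)      # generalized Hessian step
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m = 50                                        # m = number of data points
    A = rng.standard_normal((m, m))
    Q = A @ A.T + np.eye(m)                       # strongly convex quadratic term
    r = rng.standard_normal(m)
    u_sm = solve_smooth(Q, r)
    u_gd = solve_generalized_derivative(Q, r)
    print("gap between the two solvers:", np.linalg.norm(u_sm - u_gd))
```

Both solvers operate entirely on an m-vector u, which is the attraction of the dual reformulation; the gap printed at the end reflects the smoothing error of approach (i) and shrinks as alpha grows.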
Related papers
Training Lagrangian twin support vector regression via unconstrained convex minimization
In this paper, a new unconstrained convex minimization problem formulation is proposed as the Lagrangian dual of the 2-norm twin support vector regression (TSVR). The proposed formulation leads to two smaller-sized unconstrained minimization problems whose objective functions are piecewise quadratic and differentiable. It is further proposed to apply a gradient-based iterative method for solv...
Exact 1-Norm Support Vector Machines Via Unconstrained Convex Differentiable Minimization
Support vector machines utilizing the 1-norm, typically set up as linear programs (Mangasarian, 2000; Bradley and Mangasarian, 1998), are formulated here as a completely unconstrained minimization of a convex differentiable piecewise-quadratic objective function in the dual space. The objective function, which has a Lipschitz continuous gradient and contains only one additional finite parameter...
A fast algorithm for training support vector regression via smoothed primal function minimization
The support vector regression (SVR) model is usually fitted by solving a quadratic programming problem, which is computationally expensive. To improve the computational efficiency, we propose to directly minimize the objective function in the primal form. However, the loss function used by SVR is not differentiable, which prevents the well-developed gradient-based optimization methods from bein...
Primal-Dual Lagrangian Transformation method for Convex Optimization
A class Ψ of strongly concave and smooth functions ψ : R → R with specific properties is used to transform the terms of the classical Lagrangian associated with the constraints. The transformation is scaled by a positive vector of scaling parameters, one for each constraint. Each step of the Lagrangian Transformation (LT) method alternates uncons...
Various Hyperplane Classifiers Using Kernel Feature Spaces
In machine learning the classification approach may be linear or nonlinear, but it seems that by using the so-called kernel idea, linear methods can be readily generalized to the nonlinear ones. The key idea was originally presented in Aizermann’s paper [1] and it was successfully renewed in the context of the ubiquitous Support Vector Machines (SVM) [2]. The roots of SV methods can be traced b...
Journal:
Neural Networks: the official journal of the International Neural Network Society
Volume 51
Pages: -
Publication date: 2014